Biometric access system
Patent abstract:
MINIATURIZED OPTICAL BIOMETRIC DETECTION — Systems and methods for providing secure biometric access control using an optical biometric sensor in a miniaturized form factor are described. Some implementations include multiple light sources that can illuminate skin or other tissue at multiple locations during a single measurement session. An imaging array can be arranged to form images of the light exiting the tissue only after it has undergone diffuse reflectance within the tissue. Some implementations use the images to perform biometric functions. For example, the images can be used to identify an individual, verify an individual's identity, estimate an individual's demographic characteristics, etc. Such biometric functions can additionally be used to determine and affect access to protected assets. Publication number: BR112016007929B1. Application number: R112016007929-9. Filing date: 2014-10-13. Publication date: 2021-03-02. Inventors: Robert K. Rowe; Ryan Eric Martin. Applicant: Hid Global Corporation.
Patent description:
TECHNICAL FIELD [0001] The embodiments relate generally to biometrics and, more specifically, to systems and techniques for miniaturized optical biometric detection. BACKGROUND [0002] There are many scenarios in which individuals wish to limit or otherwise control access to assets. For example, individuals may wish to control access to data stored on a smartphone or other portable electronic device; to secure locations via door locks or other mechanisms; to restrict industrial equipment to use by authorized personnel only; to secure weapons or controlled substances; etc. Some physical and logical access controls are available for such scenarios, including keys, passwords, credentials, etc. Increasingly, individuals and organizations are looking to biometric solutions for access control. However, traditional biometric detection systems can be very large, expensive, unreliable, and/or otherwise undesirable to implement in many contexts. [0003] Specifically, traditional optical biometric approaches tend to be too large to implement in a miniaturized form factor and/or too expensive to implement in commodity-type consumer goods (for example, integrated into a smartphone, a door lock, an industrial equipment button, etc.). In addition, traditional approaches (especially smaller and less expensive approaches) tend to have limited reliability across broad operating conditions, such as high variability in the wetness or dryness of the finger skin, ambient lighting conditions, etc.; and/or limited ability to distinguish between a genuine skin site and a spoof (any of a variety of media and materials presented to the sensor in an attempt to replicate a genuine finger and thereby defeat the security of the system). BRIEF SUMMARY [0004] Among other things, systems and methods are described in this document to provide secure biometric access control using an optical biometric sensor in a miniaturized form factor.
Some implementations include multiple light sources that can illuminate skin or other tissue at multiple locations during a single measurement session. An imaging array can be arranged to form images of the light that leaves the tissue after undergoing diffuse reflectance within the tissue. For example, rather than forming an image of light reflected and/or diffused from the surface of the tissue, or received directly from the light sources without interacting with the tissue at all, the imaging array can be arranged to receive only light that has passed into, and interacted with, subsurface portions of the tissue. Some implementations use the images to perform biometric functions. For example, the images can be used to identify an individual, verify an individual's identity, estimate an individual's demographic characteristics, etc. Such biometric functions can additionally be used to determine and affect access to protected assets (for example, rooms, computer systems, stored digital information, physical storage cabinets, controlled substances, etc.). [0005] According to one set of embodiments, a biometric access system is provided. The system includes a biometric sensor having: an interface subsystem with an interface surface arranged to contact an individual's purported tissue; an illumination subsystem arranged to pass source illumination into the individual's purported tissue through a set of first surface regions of the purported tissue; and a detector subsystem arranged to acquire a number of images corresponding to a number of optical imaging conditions, the images based on response illumination exiting a set of second surface regions of the purported tissue, the set of second surface regions being different from the set of first surface regions, and the response illumination being produced by interactions between the source illumination and subsurface characteristics of the purported tissue.
BRIEF DESCRIPTION OF THE DRAWINGS [0006] The present disclosure is described in conjunction with the attached figures: [0007] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawings will be provided by the Office upon request and payment of the necessary fee. [0008] Figure 1 shows an illustrative biometric sensor, according to various embodiments; [0009] Figure 2 shows an illustrative traditional biometric sensor for added context; [0010] Figure 3 shows a sequence of twelve illustrative images of a finger taken under different optical imaging conditions; [0011] Figure 4 shows a simplified diagram of an illustrative arrangement used to collect the images in Figure 3; [0012] Figure 5 shows an illustrative biometric sensor, according to various embodiments; [0013] Figure 6 shows another illustrative biometric sensor, according to various embodiments; [0014] Figure 7A shows a set of raw images of a finger collected with an 8-bit linear imaging array; [0015] Figure 7B shows a set of raw images collected with an HDR imager that reports 14-bit images; [0016] Figure 8 shows an illustrative processed fingerprint image acquired with the biometric sensor, according to various embodiments (color); [0017] Figure 9 shows an image, similar to the one shown in Figure 8, acquired from a person with distinctly dry skin (color); [0018] Figure 10 shows an image, similar to those shown in Figures 8 and 9, acquired from a person with distinctly moist skin (color); [0019] Figure 11 shows a fingerprint image to which a keypoint detector has been applied (color); [0020] Figure 12 illustrates an image comparison indicating sets of matching keypoints generated from different images acquired from the same finger (color); [0021] Figure 13 shows an illustrative miniature biometric sensor integrated with a physical interface (for example, a "start" button) of a portable electronic device (for example, a smartphone) (color); [0022] Figure 14 shows an illustrative miniature biometric sensor integrated with a physical interface (for example, a handle) of a mechanical locking mechanism (for example, a door or drawer lock) (color); [0023] Figure 15 shows a block diagram of an illustrative biometric scanning environment having several interconnected systems, according to various embodiments; [0024] Figure 16 shows an exemplary computational environment for implementing a biometric sensor system, according to various embodiments; and [0025] Figure 17 shows a flow diagram of an illustrative method for providing access control using the biometric sensor embodiments described here. [0026] In the attached figures, similar components and/or features may have the same reference label. In addition, various components of the same type can be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any of the similar components having the same first reference label, regardless of the second reference label. DETAILED DESCRIPTION [0027] In the following description, several specific details are presented to provide a thorough understanding of the various embodiments. However, those of ordinary skill in the art will recognize that the invention can be practiced without these specific details. In some cases, circuits, structures, and techniques have not been shown in detail to avoid obscuring the embodiments. [0028] In some scenarios, it may be desirable to limit or otherwise control access to assets using a miniaturized biometric sensor.
For example, traditional biometric approaches, particularly approaches based on optical sensors, tend to be too large to implement in a miniaturized form factor and/or too expensive to implement in commodity-type consumer goods (e.g., integrated into a smartphone, a door lock, an industrial equipment button, etc.). In addition, traditional approaches (especially smaller and less expensive ones) tend to have limited reliability across broad operating conditions, such as high variability in the humidity or dryness of the finger skin, ambient lighting conditions, etc., and/or limited ability to distinguish between a genuine skin site and a spoof (any of a variety of media and materials presented to the sensor in an attempt to replicate a genuine finger and thereby defeat the security of the system). [0029] Some embodiments described here provide secure biometric access control using an optical biometric sensor in a miniaturized form factor. Some implementations include multiple light sources that can illuminate the skin or other tissue at multiple locations during a single measurement session. An imaging array can be arranged to form images of the light that leaves the tissue after undergoing diffuse reflectance within the tissue. For example, rather than forming an image of light reflected and/or diffused from the surface of the tissue, or received directly from the light sources without interacting with the tissue at all, the imaging array can be arranged to receive only light that has passed into, and interacted with, subsurface portions of the tissue. Some implementations use the images to perform biometric functions. For example, the images can be used to identify an individual, verify an individual's identity, estimate an individual's demographic characteristics, etc.
Such biometric functions can also be used to determine and affect access to protected assets (for example, rooms, computer systems, stored digital information, physical storage cabinets, controlled substances, etc.). [0030] Figure 1 shows an illustrative biometric sensor 100 according to various embodiments. The biometric sensor 100 includes an interface subsystem 109, an illumination subsystem 103, and a detector subsystem 107. For context, an individual's purported tissue 101 (for example, a finger, an imitation intended to fraudulently represent a finger, etc.) is shown in contact with an interface surface of the interface subsystem 109. Although the purported tissue 101 is shown in contact with the interface surface, some implementations may operate when the purported tissue 101 is in partial contact with the interface surface, or in close proximity to the interface surface. In addition, various embodiments are implemented as a miniaturized biometric sensor 100, so that the interface surface is dimensioned to accommodate only a small part of a fingerprint. [0031] The purported tissue 101 (skin, or other tissue being imaged, etc.) can be human skin. In some cases, the skin location being imaged may be located on the palmar side of the hand and may include parts of the palm or of the fingers and thumbs. The terms "finger", "fingerprint", and the like are used here to refer broadly to the skin site (or purported skin site) being imaged, although the tissue site may or may not contain skin and, additionally, may or may not cover parts of the fingerprints of the fingers or thumbs. [0032] Embodiments of the illumination subsystem 103 include a set (that is, one or more) of light sources. For example, multiple light sources can be arranged to pass source illumination into the purported tissue 101 via a set of first surface regions 111 of the purported tissue 101.
In one embodiment, the light sources can be light-emitting diodes (LEDs), and can additionally be LEDs that have different wavelength characteristics. [0033] Embodiments of the detector subsystem 107 may include an imager, such as an imaging array. In one embodiment, the imaging array may be a silicon imaging array, and may further be a CMOS or CCD imaging array. In some embodiments, the imaging array may include a color filter array, such as a Bayer pattern. In other embodiments, the imaging array may omit the color filter array, and the array elements may be individually sensitive to a wide distribution of light wavelengths. Other implementations can include any suitable optics, such as color filters, polarization filters, lenses, mirrors, etc. Embodiments of the detector subsystem 107 are arranged to acquire at least one image (e.g., a single image, a stack of images, etc.) corresponding to a plurality of optical imaging conditions. The at least one image can be based on response illumination exiting a set of second surface regions 113 of the purported tissue 101. [0034] Various embodiments of the illumination subsystem 103 and detector subsystem 107 may include (for example, or be in optical communication with interface subsystem 109 elements that include) optical elements for relaying illumination to and/or from the purported tissue 101. For example, illumination can be relayed from the illumination subsystem 103 to the purported tissue 101 and/or from the purported tissue 101 to the detector subsystem 107 using one or more lenses, microlens arrays, mirrors, optical fibers, gradient index (GRIN) lenses, SELFOC-type micro lenses, and/or any suitable combination of optical elements.
Additionally, in some implementations, the illumination elements of the illumination subsystem 103 and the imaging elements of the detector subsystem 107 may not be coplanar, so that different approaches can be used to relay the illumination to and/or from the purported tissue 101. For example, in one implementation, fiber-optic waveguides can be used to couple source illumination from multiple illumination sources of the illumination subsystem 103 to the purported tissue 101, and a microlens array can be used to direct the diffusely reflected response illumination from the purported tissue 101 to an imaging array of the detector subsystem 107. [0035] Figure 2 shows an illustrative traditional biometric sensor 200 for added context. As shown, the traditional biometric sensor 200 can include light sources 203, a detector 207, and a platen 209. The purported tissue 101 in contact with the platen 209 can be illuminated by the light sources 203 via the platen 209, and the light reflected and/or diffused by the surface of the purported tissue 101 can be imaged by the detector 207. In such a traditional biometric sensor 200, a wide area of the purported tissue 101 is illuminated by the light sources 203 through the platen 209 (for example, using illumination optics 205). The widely illuminated area can then be imaged by the detector 207 through the platen 209, using imaging optics 211. There is generally no requirement that light from the light sources 203 propagate through the subsurface portions of the purported tissue 101. In reality, many such traditional biometric sensors 200 are based on total internal reflectance (TIR) and/or other optical effects at the interface between the surface of the purported tissue 101 and the platen (for example, the difference in refractive index between a platen-air interface and a platen-tissue interface).
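The TIR contrast mechanism used by traditional sensors such as sensor 200 can be illustrated with a small Snell's-law calculation. A minimal sketch; the refractive indices below are common textbook values for glass, air, and skin, not values taken from this disclosure:

```python
import math

def critical_angle_deg(n_platen, n_outside):
    """Critical angle for total internal reflectance (TIR) at an interface,
    from Snell's law: sin(theta_c) = n_outside / n_platen."""
    return math.degrees(math.asin(n_outside / n_platen))

glass = 1.5                                   # illustrative platen index
theta_air = critical_angle_deg(glass, 1.0)    # platen-air interface, ~41.8 deg
theta_skin = critical_angle_deg(glass, 1.4)   # platen-skin interface, ~69 deg
# Light striking the platen at angles between theta_air and theta_skin is
# totally reflected where air is present (fingerprint valleys) but escapes
# where skin touches (ridges) -- the contrast mechanism of TIR sensors.
```

The gap between the two critical angles is precisely the refractive-index difference the paragraph above refers to.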
[0036] With reference to Figure 1, in contrast to the traditional implementation shown in Figure 2, embodiments are implemented so that the first surface regions 111 are optically separated from the second surface regions 113 of the purported tissue 101. In effect, the purported tissue 101 can essentially become the light source from the perspective of the detector subsystem 107. For example, as illustrated, optical blockers 105 can be used to substantially block source illumination from reaching the detector subsystem 107 without first passing through the purported tissue 101. Such optical isolation can be accomplished in several ways, and does not necessarily require separate optical components for this purpose. For example, the source illumination transmitted from the illumination subsystem 103 to the purported tissue 101, and/or from the purported tissue 101 to the detector subsystem 107, can be collimated, focused, coherent, or otherwise arranged, in such a way that multiple optical interactions and/or diffusion events will generally occur within the purported tissue 101 before source illumination can reach the detector subsystem 107. Additionally or alternatively, the paths to and from the purported tissue 101 can be isolated through the use of shields, masks, light tubes, filters, opaque coatings, etc. In one embodiment, the illumination sources can be highly directional and positioned in contact with fiber-optic light guides, in contact with a fiber-optic faceplate, in direct contact with the purported tissue 101, etc. Such configurations reduce or substantially eliminate the amount of light that can pass from the illumination subsystem 103 to the detector subsystem 107 without being diffused by the purported tissue 101. [0037] In general, the source illumination enters the purported tissue 101 and undergoes diffuse reflectance below the surface of the skin.
During such subsurface interactions, the illumination is affected by diffusion, absorption, changes in refractive index, and other such optical characteristics of the purported tissue 101. After the illumination undergoes diffuse reflection below the surface of the purported tissue 101, a portion of the illumination exits the purported tissue 101 in the direction of the detector subsystem 107. The exiting ("response") illumination is effectively the portion of the source illumination that entered the purported tissue 101 and underwent subsurface optical interactions (for example, diffuse reflectance). The exiting response illumination can pass through the interface subsystem 109 and be used to form an image via the detector subsystem 107. [0038] In some implementations, the interface subsystem 109 includes a fiber-optic faceplate. For example, a portion of the response illumination leaves the purported tissue 101 within an acceptance angle of the fiber-optic faceplate. Such light can pass through the faceplate and reach an imaging array of the detector subsystem 107, from which the detector subsystem 107 can form an image (for example, an array of values corresponding to the sensor positions of a two-dimensional array, where each sensor position is spatially correlated to the portions of the skin being imaged). [0039] Various embodiments include multiple illumination sources that are arranged around the periphery of an imaging region of the interface subsystem 109 (for example, corresponding substantially to the second surface regions 113). Alternatively or additionally, illumination from some or all of these sources is guided to enter the purported tissue 101 at locations around the periphery of the imaging region of the interface subsystem 109. As described above, the fiber-optic faceplate can be used to optically relay the characteristics of the imaged region of the purported tissue 101 to an imaging array.
In another embodiment, one or more lenses, microlens arrays, mirrors, gradient index (GRIN) lenses, SELFOC-type micro lenses, and/or any suitable combination of optical elements can be used to optically relay the characteristics of the imaged region of the purported tissue 101 to the imaging array or other components of the detector subsystem 107. [0040] Different types of illumination (for example, different wavelengths, intensities, etc.) have particular characteristics of propagation through tissue. For example, some illumination wavelengths experience a generally small amount of optical diffusion in skin tissue, and/or the diffusion is non-isotropic. Consequently, the number and positions of light sources, the wavelengths and/or intensities of the light sources, the illumination angles and/or other geometries, additional optics (e.g., filters, lenses, etc.), exposure times, the maximum size of an imager of the detector subsystem 107 (for example, an imaging array), and/or other characteristics of the biometric sensor 100 can be selected to be compatible with such optical propagation characteristics of live human finger tissue, or the like. For example, the biometric sensor 100 can be designed to have an optical distance between the light sources and the imager that corresponds to an average path traveled by the illumination through the purported tissue 101 as it moves from an entry point to an exit point through the purported tissue 101. [0041] In some embodiments, a single image or set of images (for example, taken at multiple exposure values) is acquired while each light source is illuminated. For example, a stack of images can be generated, and each image in the stack can correspond to a specific illumination condition. In certain implementations, images are acquired at different exposure times under each illumination condition.
For example, each of a number of illumination conditions is activated at each of three exposure times (for example, 5, 10, and 20 milliseconds), and at least one corresponding image is acquired for each illumination condition at each exposure time. In some embodiments, multiple light sources can be illuminated during the acquisition of a single image. In general, the multiple images acquired under multiple illumination conditions can be affected by different influences of diffusion, absorbance, refractive indices, and other optical characteristics of the purported tissue 101 being imaged, as well as by geometric differences between the illumination area and components of the detector subsystem 107 (e.g., the imaging array), etc. [0042] The images can be processed separately or combined in some way. In one embodiment, the images corresponding to illumination sources with substantially the same wavelength characteristics are combined together. In one embodiment, the multiple illumination sources include red, green, and blue LEDs, and the resulting images are combined together to form a color image that has red, green, and blue components. In another embodiment, all the images corresponding to the different illumination conditions are combined together to form a single monochrome image. [0043] In some embodiments, the multiple images are processed to perform a biometric function. In some embodiments, the biometric function includes identifying a person or verifying a person's identity. In some embodiments, the biometric function can be the estimation of the demographic characteristics of the person touching the sensor. Such characteristics may include an estimate of the person's age, gender, ethnicity, profession, etc.
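The combining step of paragraph [0042] can be sketched as follows. The per-wavelength averaging, the data layout, and the function name are illustrative assumptions, not the patented processing chain:

```python
import numpy as np

def combine_stack(images, wavelengths):
    """Combine a stack of single-illuminant raw images into an RGB composite
    (per-color mean) and a monochrome composite (mean over all conditions),
    per paragraph [0042]. `images` is a list of 2-D arrays; `wavelengths`
    labels each image as "red", "green", or "blue"."""
    planes = {}
    for img, wl in zip(images, wavelengths):
        planes.setdefault(wl, []).append(np.asarray(img, dtype=float))
    # Average the images that share a wavelength, then stack as R, G, B.
    rgb = np.stack(
        [np.mean(planes[c], axis=0) for c in ("red", "green", "blue")],
        axis=-1,
    )
    # Collapse all wavelengths into a single monochrome representation.
    mono = np.mean([np.mean(v, axis=0) for v in planes.values()], axis=0)
    return rgb, mono
```

A stack of twelve images like those in Figure 3 (four per color) would reduce to one H x W x 3 color image plus one H x W monochrome image.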
[0044] In some embodiments, the biometric function may include spoof detection, such as confirming that the sample being presented to the sensor includes genuine tissue (for example, of the type expected from a live human finger), rather than an attempt to use an altered finger, artificial material, or other means to circumvent the security of the sensor (for example, non-human tissue, non-living human tissue, etc.). Spoof detection can be carried out in several ways. In some implementations, machine learning is used to develop model criteria that distinguish living, genuine tissue from spoofs. Some embodiments use a color-contrast and/or color-decay model for spoof detection. For example, implementations can illuminate the finger with multiple colors, and genuine tissue can manifest a characteristic decay of each color over distance (for example, a spatial frequency of a high-frequency structure), which may not be present in some or in all categories of spoofs. Some implementations use multiple spoof detection techniques together. For example, some spoof detection techniques are used to detect certain categories of spoofs, thereby narrowing the set of spoofs to which machine learning and/or other techniques are applied. [0045] In some embodiments, the multiple images can be processed to filter and emphasize certain spatial frequencies or other image characteristics. In some embodiments, the image characteristics corresponding to the lower spatial frequencies can be used individually, or in combination with other instrumental characteristics, to perform spoof detection as well as identity verification and/or identification and other such tasks. In some embodiments, the higher spatial frequencies of the plurality of images can be combined together to form a representation of the skin's dermatoglyphic patterns.
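One way to realize the color-decay idea of paragraph [0044] is to regress log-intensity against distance from the illumination entry point, and compare the per-color decay rates against those expected of genuine tissue. A minimal sketch, assuming a single known entry point and an exponential falloff model (both assumptions for illustration, not the claimed method):

```python
import numpy as np

def radial_decay_rate(image, entry_xy):
    """Least-squares slope of log-intensity versus distance from the
    illumination entry point. A more negative slope means faster decay
    (e.g., blue/green light vs. red in genuine tissue); spoof materials
    may not reproduce the characteristic per-color slopes."""
    img = np.asarray(image, dtype=float)
    ys, xs = np.indices(img.shape)
    r = np.hypot(xs - entry_xy[0], ys - entry_xy[1]).ravel()
    log_i = np.log(np.clip(img.ravel(), 1e-6, None))  # guard against log(0)
    slope, _intercept = np.polyfit(r, log_i, 1)
    return slope
```

A feature vector of such slopes, one per illumination color, could then feed the model criteria (for example, machine-learned) mentioned above.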
In some embodiments, such fingerprint image(s) can be analyzed to perform identity verification and/or identification, as well as to perform spoof detection and other such tasks. [0046] In some embodiments, the multiple images acquired from a finger are analyzed to determine a degree of touch. For example, analysis of the image stack can indicate whether the user is touching the sensor lightly or pressing on it hard. [0047] Figure 3 shows a sequence of twelve illustrative images of a finger taken under different optical imaging conditions (for example, different illumination conditions). Figure 4 shows a simplified diagram of an illustrative arrangement used to collect the images 301 in Figure 3. Four LED packages 401 are arranged to illuminate the periphery of an imaging area 403 (for example, an imaging region of the interface subsystem). Each of the LED packages 401 includes an RGB (red, green, blue) trio of LED dies that can be controlled separately to emit varying amounts of red, green, and/or blue light from each of the packages 401. To acquire the images shown in Figure 3, all the LED packages 401 can be controlled to emit a single color. For example, images 301a, 301b, 301c, 301d are illustratively acquired using the LEDs emitting blue light; images 301e, 301f, 301g, 301h are illustratively acquired using the LEDs emitting green light; and images 301i, 301j, 301k, 301l are illustratively acquired using the LEDs emitting red light. [0048] As can be seen from the images 301, the intensity of light passing through, and being diffused by, the subsurface skin fades relatively quickly as the distance increases from the light source to the point of skin being imaged. In addition, light from the blue and green LEDs generally travels less far through the skin than light from the red LED.
For that reason, some implementations acquire images 301 corresponding to blue or green illumination using simultaneous lighting from multiple blue or green LED packages 401, while images 301 corresponding to red illumination can be acquired using a smaller number of LED packages 401 (for example, a single one). For example, image 301a was acquired with the blue LEDs of packages 401a, 401d, and 401c illuminated; image 301g was acquired with the green LEDs of packages 401a, 401d, and 401c illuminated; and image 301i was acquired with the red LED of package 401a illuminated. Other configurations and combinations of LEDs illuminated per raw image can be used with these and/or other implementations. [0049] For purposes of illustration, the object-space pixel resolution of the images 301 in Figure 3 is approximately 1500 pixels per inch (ppi). Such a relatively high resolution can be beneficial for acquiring and using very fine details of a fingerprint, including the pores, ridge shape, incipient ridges, and other details (for example, what is known in the art as "level III" information). Such a high resolution can also be useful for acquiring biometric information from babies, young children, and others who have very fine or delicate fingerprint structures. Many current, standard fingerprint sensors are designed with a spatial resolution of approximately 500 ppi, and some fingerprint sensors that are considered high resolution commonly support 1000 ppi imaging. Various embodiments of the biometric sensor 100 described here can be implemented with a wide range of image resolutions, from below 100 ppi to more than 2000 ppi, which can be used advantageously in a variety of applications. [0050] The size of the skin region imaged in the images 301 is approximately 0.22 by 0.22 inches.
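The wavelength-dependent reach just described can be illustrated with a toy Beer-Lambert attenuation calculation. The attenuation coefficients below are invented round numbers chosen only to show the direction of the effect (blood absorbs green and blue far more strongly than red); they are not measured tissue properties or values from this disclosure:

```python
import math

def remaining_fraction(mu_per_mm, path_mm):
    """Beer-Lambert-style attenuation: fraction of light remaining after
    traveling `path_mm` through a medium with effective attenuation
    coefficient `mu_per_mm` (illustrative exponential model only)."""
    return math.exp(-mu_per_mm * path_mm)

# Over the same assumed 3 mm subsurface path, red survives far better than
# green, which is why fewer red sources can still illuminate the full area.
red_left = remaining_fraction(0.5, 3.0)    # assumed mu ~0.5/mm for red
green_left = remaining_fraction(2.0, 3.0)  # assumed mu ~2.0/mm for green
```

Under these assumed coefficients the red fraction exceeds the green fraction by roughly two orders of magnitude, matching the qualitative behavior visible in the images 301.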
As described above, due to the relatively short propagation of blue and green light through the skin, each of the blue and green images can be illuminated with more light sources than are used for the red images. The relatively greater propagation of red light through human skin is due in part to the lower absorbance of blood at red wavelengths compared to blue or green ones. Other implementations may use alternative and/or additional wavelengths (for example, including combinations of visible light, infrared illumination, broadband light sources, etc.). For example, infrared illumination can enable imaging of a larger area of the skin than visible wavelengths can, since the propagation of infrared light at certain wavelengths can be greater than that of red light. Wavelengths from the near infrared up to the cutoff of silicon detectors (~1300 nm) can be advantageously employed to illuminate the skin. In addition to the ability to use some near-infrared wavelengths to illuminate larger areas of the skin than are practicable with equivalent visible light sources, non-visible illumination may be desirable for certain applications. In other embodiments, infrared and visible illumination, and even near-ultraviolet illumination, can be combined together advantageously for various biometric tasks, such as spoof detection and the estimation of various demographic parameters, as well as for identification and verification. [0051] The images 301 shown in Figure 3 were acquired using a monochrome silicon imager 403. Alternatively, the imager 403 may be implemented as a color imager of various types. For example, the color imager may use a color filter array having color filter elements that cover each pixel of the imager in such a way that each pixel "sees" only selected wavelengths. One such color filter array is the Bayer pattern, comprised of red, green, and blue color filter elements, although other variants can be used.
In cases where the imager 403 is a color imager, multiple illumination wavelengths can be lit during each image acquisition (for example, during each acquisition session, frame, etc.). For example, one or more blue LEDs can be turned on simultaneously with one or more red LEDs, and the two illumination conditions can be separated by extracting the blue and red pixels from the image array, or the pixels can be interpolated to form an RGB representation. In other embodiments, broad-spectrum illumination, such as white-light LEDs, can be used for illumination with a color imager (such broad-spectrum illuminators may also be used with a monochrome imager, for example, with the spectral content integrated by the imager rather than separated, as in the case of a color imager). In some cases, infrared illumination can be used advantageously in conjunction with visible illuminators and a color imager, for example, because many such color filter arrays tend to pass infrared light with little or no discrimination between some or all of the visible-light filter elements. As such, a color imager can distinguish between colors in the visible region, but can also have the properties of a monochromatic array when illuminated with appropriate infrared light. [0052] As used here, phrases such as "optical conditions", "lighting conditions", "optical imaging conditions", or the like can generally refer to differences in illumination wavelengths (for example, individual wavelengths, combinations of wavelengths, etc.), illumination geometries (for example, positions, angles, etc.), illumination patterns, illumination levels (for example, intensities, exposure times, etc.), acquisition characteristics (for example, active filter configurations, etc.), and/or any other suitable optical environmental differences between a plurality of acquired images.
In addition, such a plurality of images can be acquired in a plurality of image frames or using a single image frame. A single image frame can be used, for example, in the case of an imager with a color filter array that can simultaneously collect light from two or more light sources at different positions and of different colors. Such multiple illumination conditions can then be separated from the single image by various means. For example, if the different light sources are substantially monochromatic, with wavelengths that correspond to the different color filters of a Bayer color filter array, then individual single-color subarrays can be extracted from the resulting image. Alternatively, the raw color image can be interpolated to produce a standard RGB color image. The color planes of such an image can then be analyzed separately or combined in a variety of ways in accordance with the present invention. [0053] Figure 5 shows an illustrative biometric sensor 500, according to various embodiments. The biometric sensor 500 can be an alternative implementation of the biometric sensor 100 shown in Figure 1 and/or of the illustrative arrangement 400 shown in Figure 4, and such a biometric sensor 500 can be used, for example, to obtain the images 300 shown in Figure 3. As illustrated, the biometric sensor 500 includes an interface subsystem (for example, a platen 501), an illumination subsystem (for example, having a number of illumination sources 505 and optical guides 503), and a detector subsystem (for example, having an imager 509 and imaging optics 507). The illumination sources 505 can be LEDs, laser diodes, quantum dots, incandescent sources, and/or any other suitable illumination sources.
The optical guides 503 (for example, optical fibers, waveguides, light pipes, diffusers, polarizers, optical filters, and/or any other suitable optical guides, or the like) can optically couple the source illumination from the illumination sources 505 to the edges of an imaging area (for example, corresponding to the platen 501). The source illumination can enter the purported tissue (not shown) via the optical guides 503, the entering illumination can undergo diffuse subsurface reflectance within the purported tissue, and a portion of the illumination can exit through the platen 501 as response illumination. The response illumination can be directed via the platen 501 and the imaging optics 507 (for example, a lens, multiple lenses, a microlens array, flat mirrors, focusing mirrors, polarizers, GRIN lenses, and/or other suitable imaging optics 507) to the imager 509 (for example, an imaging array). [0054] Figure 6 shows another illustrative biometric sensor 600, according to various embodiments. The biometric sensor 600 can be an alternative implementation of the biometric sensor 100 of Figure 1, of the biometric sensor 500 of Figure 5, etc. As illustrated, the biometric sensor 600 includes an interface subsystem (for example, a fiber optic faceplate 601), an illumination subsystem (for example, having a number of illumination sources 505), and a detector subsystem (for example, having an imager 509). The source illumination from the illumination sources 505 can enter the purported tissue (not shown) through one or more first (for example, peripheral) regions of the fiber optic faceplate 601, the entering illumination can undergo diffuse subsurface reflectance within the purported tissue, and a portion of the illumination can exit the purported tissue at one or more second (for example, central) regions of the fiber optic faceplate 601 as response illumination.
The response illumination can be directed through the fiber optic faceplate 601 to the imager 509. Alternatively, the illumination sources 505 can illuminate the purported tissue directly (for example, they can be configured to contact the purported tissue directly or through a thin protective layer, etc.), or via waveguides, light pipes, etc. The fiber optic faceplate 601 can be replaced with an array of microlenses, lenses, mirrors, and/or another suitable optical interface. [0055] In various embodiments, the imager used to collect the raw images may have a linear response with respect to light intensity, or may have a non-linear response of some type. For example, an imager capable of high dynamic range (HDR) imaging can be used to enable light to be collected over a larger imaging range while maintaining gray levels that are neither saturated nor in the noise floor of the imaging system. Figure 7A shows a set of raw images 700a of a finger collected with an 8-bit linear imaging array. Figure 7B shows a set of raw images 700b collected with an HDR imager that reports 14-bit images. The two figures are displayed as log10 transformations of the actual bit levels to facilitate visual examination. As can be seen, the HDR images 700b have smaller regions of saturation and smaller regions of dark pixel values compared to the corresponding linear images 700a. [0056] Raw images similar to those shown in Figure 3, Figure 7A, and Figure 7B can be analyzed in different ways to perform biometric tasks. In one embodiment, the raw images can be mathematically decomposed to quantify spectral and textural characteristics, and these characteristics can be used independently or in conjunction with other information to perform identity verification or identification; spoof detection; estimation of demographic characteristics such as age, gender, ethnicity, and other such parameters; etc.
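A log10 display transform of the kind used for Figures 7A and 7B above can be sketched as follows; the scaling convention (normalizing by the full-scale level) is an illustrative assumption, not a detail of the described figures:

```python
import numpy as np

def log10_display(img, bits):
    """Map raw integer gray levels to [0, 1] on a log10 scale.

    Compresses the wide dynamic range of an HDR imager so that both
    dim and bright regions remain visually distinguishable.
    """
    img = np.asarray(img, dtype=float)
    max_level = 2.0 ** bits - 1.0
    return np.log10(1.0 + img) / np.log10(1.0 + max_level)

# A 14-bit HDR frame spans gray levels that an 8-bit imager would clip at 255.
hdr = np.array([0, 255, 4095, 16383], dtype=np.uint16)
disp = log10_display(hdr, bits=14)
print([round(v, 3) for v in disp])  # [0.0, 0.571, 0.857, 1.0]
```

Note how the 8-bit saturation level (255) maps to only 0.571 of the display range here, leaving headroom for the brighter HDR levels.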
Some implementations can perform such characterization using a mathematical decomposition, such as principal component factor analysis, to generate factors for each image plane under consideration across a representative set of finger images. Other implementations may use Fourier analysis for the decomposition, which can be used to find the amount of energy in the raw images contained within certain spatial frequency ranges and/or angular frequency ranges. Other implementations may use wavelet decomposition for such purposes, including, but not limited to, the use of dual-tree complex wavelets. Alternatively, Gabor filters, Laplacian filters, and/or other suitable decomposition techniques can be used to decompose and quantify elements of the raw images. [0057] Once the raw images are decomposed and quantified, the resulting decomposition coefficients can be used in a variety of classification methods to perform such biometric tasks as spoof detection, gender estimation, ethnicity estimation, identity verification, identification, and/or other classifications. In some cases, the decomposition coefficients may be augmented with other values, such as instrumental and environmental parameters, which may include imager gain and exposure settings, LED drive currents and pulse durations (or comparable characteristics of other types of light sources), ambient temperatures or temperatures of some part of the sensor itself, humidity measurements, ambient light measurements, various parameters that can be measured during sensor manufacture (for example, flat field, DC offset, color, etc.), etc. [0058] In another embodiment, the raw images can be analyzed to estimate the pressure being applied by the user when pressing on the sensor. Changes in the physiology of a user's finger during a touch can be manifested in the raw images. These changes can then be quantified through mathematical decomposition of the raw images.
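One way to sketch the Fourier-based quantification described above is to measure the fraction of spectral energy falling in radial spatial-frequency bands; the band edges and test pattern below are illustrative assumptions, and the resulting band energies play the role of the decomposition magnitudes discussed here:

```python
import numpy as np

def radial_band_energy(img, bands):
    """Fraction of spectral energy in each radial spatial-frequency band.

    `bands` is a list of (low, high) radii in cycles/image; the DC term
    is excluded so the measure reflects texture rather than brightness.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    power[h // 2, w // 2] = 0.0  # drop DC
    total = power.sum()
    return [power[(radius >= lo) & (radius < hi)].sum() / total
            for lo, hi in bands]

# A vertical sinusoidal "ridge" pattern concentrates energy at one frequency.
x = np.arange(64)
img = np.cos(2 * np.pi * 8 * x / 64)[None, :] * np.ones((64, 1))
low, high = radial_band_energy(img, [(0.5, 4.5), (4.5, 12.5)])
print(round(low, 2), round(high, 2))  # essentially all energy in the 8-cycle band
```

Angular-frequency ranges, also mentioned above, could be handled analogously by bucketing the same power spectrum by `arctan2` angle instead of radius.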
The magnitudes of the resulting decomposition can then be used in conjunction with a variety of regression or classification algorithms to estimate the pressure exerted by the finger, or a qualitative measure such as "light touch" or "firm touch". Such pressure estimates can be used, for example, to determine appropriate biometric templates to use for comparison with acquired images, to determine whether the pressure is characteristic of the individual, to help detect certain types of spoof attempts, etc. [0059] In another embodiment, the raw images can be processed and combined in different ways to produce representations that emphasize the coarse and fine details of the fingerprint. For example, a set of images similar to those in Figure 3 can be transformed by taking the logarithm of the gray values. This set of log-images can then be filtered in a number of ways to separate fingerprint characteristics and other appropriate spatial details. For example, a set of smoothed log-images can be generated by convolving the log-images with a Gaussian kernel of a certain size. The smoothed images can then be subtracted from the original images to produce a set of images that emphasizes fine details. These resulting images can then be averaged together according to illumination wavelength (for example, by averaging all images illuminated by red light separately from those acquired using green light and blue light). The resulting image can then be displayed as a single color image by concatenating the averaged red, green, and blue images. [0060] Figure 8 shows an illustrative processed fingerprint image 800 acquired with a biometric sensor, according to various embodiments. The image 800 was generated using a technique similar to the one described above.
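A minimal sketch of the log/smooth/subtract/average pipeline described in paragraph [0059]; the kernel size, sigma, and synthetic input are illustrative assumptions, not parameters of the described sensor:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth(img, sigma=2.0):
    """Separable Gaussian smoothing (a stand-in for convolution with
    a 2-D Gaussian kernel of a chosen size)."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def fine_detail(raw_images):
    """Log-transform each raw image and subtract its smoothed version,
    yielding detail images that emphasize fine fingerprint structure."""
    details = []
    for raw in raw_images:
        log_img = np.log(1.0 + np.asarray(raw, dtype=float))
        details.append(log_img - smooth(log_img))
    return details

def rgb_composite(red_imgs, green_imgs, blue_imgs):
    """Average the detail images per illumination wavelength, then stack
    the averaged red, green, and blue planes into one color image."""
    planes = [np.mean(fine_detail(imgs), axis=0)
              for imgs in (red_imgs, green_imgs, blue_imgs)]
    return np.stack(planes, axis=-1)

rng = np.random.default_rng(0)
imgs = [rng.integers(1, 255, size=(32, 32)) for _ in range(2)]
rgb = rgb_composite(imgs, imgs, imgs)
print(rgb.shape)  # (32, 32, 3)
```

The subtraction step acts as an unsharp-mask-style high-pass filter, which is why fine features such as pores survive while slow illumination gradients are suppressed.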
However, there are a large number of different mathematical operations that can be performed in many different combinations and orders to obtain similar results. The raw images used to generate the image 800 in Figure 8 can be represented in several alternative ways, including by applying a decorrelation stretch algorithm across all of the original images (for example, 12), or alternatively by combining all of the original images into a single gray-scale image. Although any such operation may be useful for certain functions, the RGB format of Figure 8 can be particularly useful for demonstrating an observation made by the inventors regarding how pores and other fingerprint structures manifest themselves as a function of illumination wavelength. For example, there are a large number of pores visible in Figure 8 (one of which is labeled as pore 801). As can be seen, these fine details are rendered in a different color than the surrounding ridge 803 or valley 805 of the fingerprint. The valley 805 is relatively dark and gray, which may indicate that the valleys are relatively dark features under all three wavelengths of light used to generate the image 800. By comparison, the ridge 803 appears red, which may indicate that the ridges are disproportionately bright under red illumination compared to blue and green illumination. Additionally, the pore 801 is cyan, which may indicate that the pore 801 is relatively light under blue and green illumination but relatively dark under red illumination. Such differences suggest that it may be advantageous to analyze such differently illuminated images separately, or in a way that preserves the information even when the contrast of a feature changes polarity across illumination wavelengths or other conditions. [0061] Figure 9 shows an image 900, similar to the one shown in Figure 8, acquired from a person with distinctly dry skin.
Although the pores are less pronounced than those in Figure 8, it is evident that different characteristics of the skin manifest differently under the three illumination wavelengths. For example, as illustrated, the valleys 901 are a different color (blue) than the ridges 903 (red). [0062] Figure 10 shows an image 1000, similar to those shown in Figures 8 and 9, acquired from a person with distinctly moist skin. Experimentally, this image was acquired after an individual's finger was dipped in a container of water and then immediately placed on a biometric sensor platen without first wiping or otherwise removing the water from the fingertip. As can be seen, the details of the fingerprint are well resolved under this extreme wet condition using the same procedure outlined above. [0063] Some embodiments include one or more salient point detectors (for example, "keypoint" detectors, "corner" detectors, etc.) to process raw data, or to further process data derived from the acquired images. Figure 11 shows a fingerprint image 1100 to which a keypoint detector has been applied. The circles show the detected keypoints 1101 and the lines show the orientations of the keypoints 1101. For example, each keypoint 1101 can be characterized by the dominant direction of the local fingerprint characteristics. Each keypoint 1101 can be further characterized by a summary of the fingerprint characteristics proximal to the keypoint. Certain implementations can exploit the broad apparent inconsistencies between micro-level flows (for example, at the pixel level) and macro-level flows. For example, implementations can determine keypoints 1101 by computing a cornerness measure on a color gradient (for example, using a Harris corner detector approach or another structure tensor approach, etc.).
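A structure-tensor cornerness computation of the kind mentioned above might be sketched as follows; the 3x3 aggregation window, the Harris constant, and the test pattern are illustrative assumptions, not details of the described detector:

```python
import numpy as np

def box3(a):
    """Sum over a 3x3 window (zero-padded), used to aggregate the
    structure tensor over a neighborhood."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.05):
    """Harris cornerness from the windowed 2x2 structure tensor of the
    image gradients; large positive values mark corner-like keypoints."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    ixx, iyy, ixy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# Bright square on a dark background: its corners score positive, while
# straight edges (one dominant gradient direction) score negative.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
resp = harris_response(img)
print(resp[4, 4] > 0, resp[4, 8] < 0)  # True True
```

The local gradient direction at each detected point could also serve as the keypoint orientation shown by the lines in Figure 11.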
[0064] In some embodiments, each keypoint 1101 can be matched against other keypoints 1101 that were generated from a plurality of images acquired during a second measurement session to determine whether the two fingers used in the two measurement sessions were likely the same finger. For example, first keypoints 1101 can be detected (computed) in a set of images acquired during an individual's enrollment, second keypoints 1101 can be detected in a set of images acquired during the individual's subsequent authentication, and the authentication can be based at least in part on a comparison of the first and second keypoints 1101. [0065] Images can be matched by comparing the keypoints 1101 in several ways. For example, matching algorithms such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and/or other techniques can be used to determine a match. Figure 12 illustrates an image comparison 1200 indicating matched sets of keypoints 1201 generated from different images acquired from the same finger. Before fingerprint pattern matching, the first image being matched can be generated by creating a mosaic (for example, a pixel overlay or composite) of individual images known to come from the same finger. Such a composite image can be generated during enrollment by an authorized user, for example, who would be prompted to touch the sensor multiple times. The resulting set of images can then be combined to form a composite image using a matching technique, such as those described above. The resulting composite enrollment image can cover an area larger than any single image and can therefore be more tolerant of differences in finger placement during subsequent user identification or verification.
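A hedged sketch of ratio-test keypoint matching in the spirit of SIFT-style matching; the descriptors here are toy 2-D vectors standing in for real local descriptors, and the ratio threshold is an illustrative assumption:

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.75):
    """Match descriptor sets by nearest neighbor with a ratio test:
    accept a match only when the best candidate in desc_b is clearly
    closer than the second best, which rejects ambiguous pairings."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Enrollment descriptors vs. authentication descriptors: two clear
# correspondences plus one ambiguous point that the ratio test drops.
enroll = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
auth = np.array([[0.1, 0.0], [9.9, 0.1], [5.0, 4.9], [5.0, 5.1]])
print(ratio_match(enroll, auth))  # [(0, 0), (1, 1)]
```

The number and geometric consistency of such matches could then feed a same-finger decision like the one described in paragraph [0064].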
[0066] Although in certain embodiments described above the illumination subsystem and the detection subsystem can make contact with the skin, either or both subsystems can be configured such that no skin contact is necessary. For example, the illumination subsystem may include a light source and some means of focusing the light into a point or other shape using a lens, lenses, mirrors, and/or other such means. This focused light can be incident on a region of the skin that is different from the region being imaged. Specifically, the focused illumination light can be incident near an edge of the region being imaged. Similarly, imaging of the desired skin region can be performed without contact by using one or more lenses, mirrors, and/or other such means to form the image of the skin on the imaging array. [0067] As described above, some embodiments can be implemented as miniaturized biometric sensors for integration into other mechanical and/or electronic systems. Some such integrations are illustrated in Figures 13 and 14. Figure 13 shows an illustrative miniaturized biometric sensor integrated with a physical interface 1301 (for example, a "home" button) of a portable electronic device (for example, a smartphone). Alternatively, the sensor can be integrated with the portable electronic device in other ways, for example, as a feature separate from any other interface element. In one implementation, the sensor can be integrated into a mechanical button that can be pressed to make an electrical contact and that is also capable of acquiring and matching the user's biometric information. For example, pressing the button can activate the sensor, causing one or more biometric images to be acquired through the sensor. The biometric sensor can then determine whether the button was pressed by a genuine finger (as opposed to an imitation, another part of the body, etc.)
and/or can perform one or more biometric functions, such as user identification, verification of a user's identity, estimation of user demographic characteristics (for example, age, gender, ethnicity, profession, etc.), etc. In other implementations, the presence of a finger can be detected without the button being pressed, so that the sensor is activated to capture an image only after the presence of the finger is detected (for example, or when alerted, when requested, etc.). For example, presence detection can be based on capacitive detection, resistive detection, impedance detection, mechanical detection, and/or any other suitable techniques. Additionally or alternatively, presence detection may involve monitoring the imaging array to determine whether the array is receiving illumination, whether the spectral content is compatible with the source output, whether the detector output is consistent with receiving illumination from a person's skin, etc. [0068] Figure 13 also shows an exploded view of the illustrative implementation of the miniaturized biometric sensor. As illustrated, the sensor includes a fiber optic faceplate 1303, an imaging array 1307, light sources 1309, and presence detection means 1305. The sensor is combined with a mechanical switch 1311. The imaging array 1307 can be a silicon array and can include a color filter array. The light sources 1309 can be LEDs, and can be monochromatic in a variety of colors, white-light LEDs, red-green-blue trios or other combinations, etc. The mechanical switch 1311 can be a membrane switch, a piezoelectric switch, and/or another suitable switch. In some implementations, information from the imaging array 1307 can be processed using integrated processing, the processing capabilities of the portable electronic device, and/or external resources (for example, accessed via a wireless network) to implement any of the biometric functionality and/or other functionality described above.
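The presence-triggered acquisition described above might be sketched as a simple polling loop; the sensor-interface callables are hypothetical stand-ins for real hardware drivers, not part of the described implementation:

```python
def acquire_when_present(presence_samples, capture, max_polls=100):
    """Poll a presence signal (e.g., capacitive or impedance detection)
    and trigger image capture only once a finger is detected.

    `presence_samples` is an iterable of boolean presence readings and
    `capture` is a callable returning one biometric image; both are
    hypothetical stand-ins for the real sensor interface.
    """
    for polls, present in enumerate(presence_samples):
        if present:
            return capture()  # sensor activated only on detection
        if polls + 1 >= max_polls:
            break
    return None               # no finger detected: sensor stays idle

# Simulated readings: a finger arrives on the third poll.
readings = [False, False, True, True]
image = acquire_when_present(readings, capture=lambda: "raw-image-frame")
print(image)  # raw-image-frame
```

Keeping the imager off until presence is confirmed is one way such a design could reduce power draw in a battery-operated device.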
For example, the biometric functionality facilitated by the sensor can provide secure access to the portable electronic device itself, and/or can capture biometric information for use by a system other than the portable electronic device (for example, a secure external system, such as an external computer system, an automated teller machine (ATM), an automobile or heavy machinery control, a protected area, a retail point-of-sale system, a keyless entry device, etc.). Alternatively, embodiments can be combined with the general-purpose camera functionality of a smartphone or other mobile device, or with the integrated webcam functionality of many computer monitors and other devices. The construction of such a device may include a variable-focus lens to focus on the finger for biometric detection and/or to change the focus for imaging distant external objects in general-purpose imaging applications. [0069] Many other access control applications can be implemented by integrating embodiments of the miniaturized biometric sensor described herein. Figure 14 shows an illustrative miniaturized biometric sensor integrated with a physical interface 1401 (for example, a knob) of a mechanical locking mechanism (for example, a door or drawer lock). For example, the sensor can be sized to fit into an opening for the cylinder of a conventional key-based locking mechanism (for example, generally having a diameter of no more than approximately 1/2 inch and a depth of no more than 2 inches), and can be retrofitted to existing hardware or custom-designed into new hardware. In either case, the sensor system logic can activate conventional electromechanical elements to actuate the locking/unlocking mechanisms. In the illustrated implementation, the mechanical locking assembly is movable between a locked position and an unlocked position to enable and disable the opening of a restricted access area.
A knob 1401 is provided to open the restricted access area when unlocked, and the biometric sensor is physically integrated with the knob and in communication with the mechanical locking assembly, so that the mechanical locking assembly is movable to its unlocked position according to whether the individual is biometrically authorized by the biometric sensor. There are similar physical access applications in many segments, including automotive and/or industrial equipment (for example, integrated into instrument controls, cabin access, etc.); service and repair of sensitive systems; handling of hazardous or otherwise controlled material; access to weapon systems (for example, integrated into a trigger or safety mechanism, a storage cabinet, etc.); etc. [0070] Figure 15 shows a block diagram of an illustrative biometric scanning environment 1500 having several interconnected systems, according to different embodiments. The environment 1500 includes a miniaturized sensor system 1510, which can be implemented as described herein. For example, the miniaturized sensor system 1510 includes an interface subsystem 1515, an illumination subsystem 1570, and a detection subsystem 1560. In some implementations, the functionality of the illumination subsystem 1570 and/or the detection subsystem 1560 can be controlled, at least in part, by a set of (i.e., one or more) processors 1505. The processors 1505 can include dedicated controllers (for example, an illumination controller, an LED driver, an imaging controller, a graphics processor, etc.), general-purpose central processing units (CPUs), or any other suitable type of processor 1505. Additionally, as illustrated, the processors 1505 can be integrated with the miniaturized sensor system 1510 and/or be "external" (for example, the miniaturized sensor system 1510 can use the processing functionality of the interconnected systems).
[0071] As illustrated, embodiments of the miniaturized sensor system 1510 can be in communication with one or more enrollment systems 1550, access control systems 1520, storage systems 1527, etc. Such communication can be through direct integration (for example, the miniaturized sensor system 1510 can be integrated into an access system 1520, such as an electromechanical lock), through a direct connection (for example, through a dedicated wired or wireless connection), through one or more networks 1585 (for example, wired or wireless, local or remote, secure or unsecured, etc.), and/or in any other manner. [0072] In some embodiments, an individual enrolls a fingerprint using a miniaturized sensor system 1510. For example, during an enrollment routine, implementations capture multiple samples (for example, ten or more) of the enrollee's fingerprint. Some implementations include a user interface, or a means of communicating with a user interface, to prompt the enrollee to provide multiple samples (for example, by prompting the enrollee to place a finger on the sensor multiple times, each time acquiring a sample and indicating whether the sample was acquired in an acceptable manner). The multiple samples can be used as individual templates for comparison against future authentication attempts and/or combined to form larger-area templates for future comparison. For example, enrollment data can be stored on the storage system 1527 (for example, using integrated storage of the miniaturized sensor system 1510, network storage, storage resources of another interconnected system, etc.). When the individual subsequently attempts access using the miniaturized sensor system 1510, the miniaturized sensor system 1510 can acquire new biometric data and compare the new biometric data with the stored biometric enrollment data to determine whether to authenticate the individual. [0073] In other embodiments, an individual enrolls a fingerprint on a separate enrollment system 1550.
Some enrollment systems 1550 may include a larger-area scanner, such as a conventional optical fingerprint reader (for example, using TIR techniques, multispectral techniques, etc.), larger-area sensors (for example, configured to scan multiple fingerprints, palm prints, etc.), and/or other types of biometric scanners (for example, iris scanners, voice recorders, etc.). For example, the miniaturized sensor system 1510 can be sized to image only a first portion of a fingerprint, so that secure future matching can be facilitated by comparing the small imaged portion with a larger, previously imaged portion of the fingerprint. This can be accomplished with the miniaturized sensor system 1510 alone by capturing several fingerprint images during enrollment (for example, and stitching them together to form a larger imaged region, or using the images together for comparison). However, a larger-area scanner can image a larger region of the fingerprint by acquiring a smaller number of sample images (for example, one). For example, this can speed up the enrollment process and/or increase its security. [0074] In particular, the images and/or processed data acquired during enrollment can be used as templates for comparison with future attempts to authenticate with one or more miniaturized sensor systems 1510. In addition, such biometric templates can be stored in a centralized storage system 1527 (i.e., a system accessible by one or more instances of the miniaturized sensor system 1510). For example, a large facility may include a single enrollment system 1550 (for example, a larger-area scanner in a security office) in communication with a highly secure storage system 1527 that maintains biometric templates and/or other data (for example, demographics, access permissions, and/or other data about individuals that can be stored in association with their enrolled biometric data).
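The template-comparison step at the heart of this enroll-then-verify flow might be sketched as follows; the cosine-similarity measure, the feature vectors, and the threshold are illustrative assumptions, not the matching method of the described system:

```python
import numpy as np

def verify(probe, templates, threshold=0.9):
    """Compare a newly acquired biometric feature vector against stored
    enrollment templates via cosine similarity; report the matching
    identity when any template is similar enough, else None."""
    probe = probe / np.linalg.norm(probe)
    for name, tmpl in templates.items():
        sim = float(probe @ (tmpl / np.linalg.norm(tmpl)))
        if sim >= threshold:
            return name
    return None

# Hypothetical centralized template store keyed by enrolled identity.
templates = {"alice": np.array([1.0, 0.0, 0.2]),
             "bob": np.array([0.0, 1.0, 0.1])}
print(verify(np.array([0.95, 0.05, 0.2]), templates))  # alice
print(verify(np.array([0.5, 0.5, 0.5]), templates))    # None
```

In a deployment like the one described, the feature vectors would come from the decomposition coefficients or keypoint summaries discussed earlier, and the store would live on the centralized storage system 1527.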
The large facility can also include many controlled-access assets (for example, rooms, storage cabinets, computer systems, etc.), each individually protected by a networked instance of the miniaturized sensor system 1510. When an individual attempts to access one of the controlled-access assets through the respective instance of the miniaturized sensor system 1510, the acquired biometrics can be compared with the securely stored enrollment data to determine and enforce access privileges. [0075] Similar functionality can be provided if an instance of the miniaturized sensor system 1510 is used for enrollment. For example, an individual can enroll his or her fingerprint using an instance of the miniaturized sensor 1510 integrated into a portable electronic device. When the individual attempts to access an asset controlled by a second instance of the miniaturized sensor system 1510, the second instance can acquire new biometric data from the individual, communicate with the portable electronic device (or another external storage medium), and compare against the enrollment data to determine and enforce access. [0076] Various functionality described above can be implemented in one or more computing environments. Figure 16 shows an exemplary computing environment 1600 for implementing a biometric sensor system, according to various embodiments. The computing environment 1600 can be implemented as, or incorporated into, a single computer system or distributed computer systems, or in any other useful way. The computing environment 1600 is shown including hardware elements that can be electrically coupled via a bus 1655. [0077] The hardware elements may include one or more central processing units (CPUs) and/or other processor(s) 1505 (for example, as described with reference to Figure 15). Implementations can also include one or more input/output devices 1610, which can include and be integrated with an interface subsystem 1515, as described above.
Some implementations also include a power subsystem 1607, including any suitable power storage media, power electronics, power interfaces, etc. Some implementations may allow data to be exchanged, via a communication subsystem 1680, with one or more networks 1585 and/or any other computer or external system (for example, as described above with respect to Figure 15). The communication subsystem 1680 can include a modem, a network card (wireless or wired), an infrared communication device, and/or any other suitable component or combination thereof. [0078] The computing environment 1600 can also include one or more storage devices 1620. As an example, the storage device(s) 1620 can be disk drives, optical storage devices, or solid-state storage devices such as random access memory (RAM) and/or read-only memory (ROM), which can be programmable, flash-updatable, and/or the like. The computing environment 1600 may additionally include a computer-readable storage media reader 1625a and a working memory 1640, which may include RAM and ROM devices as described above. The computer-readable storage media reader 1625a can additionally be connected to computer-readable storage media 1625b, together (and optionally in combination with the storage device(s) 1620) comprehensively representing remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing computer-readable information. The storage device(s) 1620, the computer-readable storage media and media reader 1625, and/or the working memory 1640 can be implemented as a storage subsystem 1527 (for example, as shown in Figure 15). In some embodiments, the computing environment 1600 may also include a processing acceleration unit 1635, which may include a DSP, a special-purpose processor, and/or the like.
[0079] The computing environment 1600 may also include software elements, shown as being currently located within a working memory 1640, including an operating system 1645 and/or other code 1650, such as an application program (which can be a client application, web browser, mid-tier application, etc.). For example, embodiments can be implemented as instructions which, when executed by one or more processors 1505, cause the processors 1505 to perform certain functions. Such functions may include the functionality of an illumination controller 1673 (which can direct the operation of illumination elements 1675 as part of an illumination subsystem 1570) and of a detection controller 1663 (which can direct the operation of detection elements 1665 as part of a detection subsystem 1560), for example, as described above. [0080] A software module can be a single instruction, or many instructions, and can be distributed over several different code segments, between different programs, and across multiple storage media. Thus, a computer program product can perform the operations presented herein. For example, such a computer program product may be a tangible computer-readable medium having instructions stored thereon in a tangible (and/or encoded) form, the instructions being executable by one or more processors to perform the operations described herein. The computer program product may include packaging material. Software or instructions can also be transmitted via a transmission medium. For example, software can be transmitted from a network site, server, or other remote source using a transmission medium such as a coaxial cable, fiber optic cable, twisted wire pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, or microwave. [0081] Alternative embodiments of a computing environment 1600 can have several variations from those described above.
For example, custom hardware could also be used, and/or particular elements could be implemented in hardware, software (including portable software, such as applets), or both. In addition, connections to other computing devices such as network input/output devices can be employed. Software of the computing environment 1600 may include code 1650 for implementing embodiments of the invention as described herein. For example, although not shown as part of working memory 1640, certain functionality of other subsystems (for example, interface subsystem 1515, storage subsystem 1527, etc.) can be implemented with any suitable combination of hardware and software, including using code 1650 stored in working memory 1640. [0082] Figure 17 shows a flow diagram of an illustrative method 1700 for providing access control using biometric sensor embodiments described here. Embodiments start at stage 1704 upon detection of a biometric acquisition trigger. The biometric acquisition trigger can include presence detection, interaction with an electromechanical interface integrated with the sensor, alerts, etc. At stage 1708, source illumination can be passed from an illumination subsystem (for example, one or more illumination sources that may or may not be coupled with optical guides, or the like) to one or more first surface regions of the purported tissue of an individual through an interface subsystem (for example, a fiber-optic faceplate, a faceplate, etc.). Response illumination can be received at a detector subsystem under a plurality of optical imaging conditions at stage 1712. The response illumination can be illumination that exits one or more second surface regions of the purported tissue (i.e., different from the first surface regions) in response to interactions between the source illumination and subsurface characteristics of the purported tissue.
For example, implementations can be configured so that the source illumination cannot be received by the detector subsystem until it has first passed through the subsurface of the purported tissue, thereby becoming response illumination. At stage 1716, at least one image can be acquired corresponding to the plurality of optical imaging conditions and based on the received response illumination. In various embodiments, the acquired image can be processed to perform one or more functions, such as identifying a user, verifying a user's identity, estimating a user's demographic characteristics, granting access to secured assets (for example, locations, devices, secured materials, etc.), etc. [0083] The methods disclosed herein include one or more actions for achieving the described method. The method and/or the actions can be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of the specific actions can be modified without departing from the scope of the claims. [0084] Other examples and implementations are within the scope and spirit of the disclosure and the appended claims. For example, features implementing functions can also be physically located in various positions, including being distributed such that parts of the functions are implemented at different physical locations. In addition, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list such that, for example, a list of "at least one of A, B, or C" means A or B or C or AB or BC or ABC (i.e., A and B and C). In addition, the term "exemplary" does not mean that the described example is preferred or better than other examples. [0085] Various changes, substitutions, and alterations to the techniques described herein can be made without departing from the technology of the teachings as defined by the appended claims.
In addition, the scope of the disclosure and claims is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods, and actions described above. Processes, machines, manufactures, compositions of matter, means, methods, or actions, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described here can be used. Consequently, the appended claims include within their scope such processes, machines, manufactures, compositions of matter, means, methods, or actions.
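The acquisition flow of method 1700 described above (stage 1704 trigger detection, stage 1708 source illumination, stage 1712 response illumination, stage 1716 per-condition image capture) can be sketched as follows. This is an illustrative sketch only: the `sensor` driver object and its methods (`wait_for_trigger`, `illuminate`, `capture`, `illumination_off`) are hypothetical names, not part of the disclosed system.

```python
from dataclasses import dataclass


@dataclass
class ImagingCondition:
    """One optical imaging condition: a wavelength/exposure combination."""
    wavelength_nm: int
    exposure_ms: float


def run_measurement_session(sensor, conditions):
    """Sketch of method 1700: wait for a trigger, then acquire one image
    per optical imaging condition from the response illumination.

    `sensor` is a hypothetical driver object wrapping the illumination
    and detector subsystems."""
    # Stage 1704: block until a biometric acquisition trigger is detected
    # (presence detection, electromechanical interface interaction, alert).
    sensor.wait_for_trigger()

    images = []
    for cond in conditions:
        # Stage 1708: drive the illumination subsystem so source light
        # enters the purported tissue at the first surface regions.
        sensor.illuminate(wavelength_nm=cond.wavelength_nm)
        # Stages 1712/1716: the detector only sees light that re-emerges
        # from the second surface regions after diffuse reflectance, so
        # each capture records response illumination for one condition.
        images.append(sensor.capture(exposure_ms=cond.exposure_ms))
    sensor.illumination_off()
    return images
```

In this sketch, each element of `conditions` yields one image, matching the claim language that each image is acquired under a different combination of illumination wavelength and exposure time.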
Claims (15) [0001] 1. Biometric access system, characterized by the fact that it comprises: a biometric sensor (100) comprising: an interface subsystem (109) comprising an interface surface arranged to contact purported tissue (101) of an individual; an illumination subsystem (103a, 103b) arranged to pass source illumination to the purported tissue (101) of an individual via a set of first surface regions (111a, 111b) of the purported tissue (101); a detector subsystem (107) arranged to acquire a plurality of images corresponding to a plurality of optical imaging conditions, the plurality of images being based on response illumination exiting a set of second surface regions (113) of the purported tissue (101), the set of second surface regions (113) being different from the set of first surface regions (111a, 111b), the response illumination being produced by interactions between the source illumination and subsurface characteristics of the purported tissue (101), in which the detector subsystem is arranged to acquire the plurality of images corresponding to the plurality of optical imaging conditions such that each image is acquired under a different combination of illumination wavelength and exposure time; and one or more processors (1505) to direct the operation of the illumination subsystem (103a, 103b) and the detector subsystem (107) to pass source illumination of different wavelengths and exposure times to the purported tissue (101) through the set of first surface regions (111a, 111b) and to acquire the plurality of images based on the response illumination exiting the set of second surface regions (113) produced by the interactions between the source illumination of different wavelengths and exposure times and the subsurface characteristics of the purported tissue (101). [0002] 2.
Biometric access system, according to claim 1, characterized by the fact that the interface subsystem (109) further comprises: a shield arranged to block source illumination, preventing it from reaching the detector subsystem (107) without first entering the purported tissue (101). [0003] 3. Biometric access system, according to claim 1, characterized by the fact that the interface subsystem (109) further comprises: a set of optical guides (503a, 503b) arranged to guide the source illumination from the illumination subsystem (103a, 103b) to the set of first surface regions (111a, 111b), and/or from the interface subsystem (109, 501) to the detector subsystem (107), thereby blocking the source illumination, preventing it from reaching the detector subsystem (107) without first entering the purported tissue (101). [0004] 4. Biometric access system, according to claim 3, characterized by the fact that the set of optical guides (503a, 503b) is based on a fiber-optic faceplate, and the interface surface is integrated with the fiber-optic faceplate. [0005] 5. Biometric access system, according to any one of the preceding claims, characterized by the fact that the detector subsystem (107) is additionally arranged to detect the presence of a desired skin location in relation to the interface surface before acquiring the plurality of images. [0006] 6. Biometric access system, according to any one of the preceding claims, characterized by the fact that: the set of first surface regions (111a, 111b) comprises a plurality of first surface regions (111a, 111b); the illumination subsystem (103a, 103b) comprises a plurality of light sources (505a, 505b); and each light source (505a, 505b) is arranged to pass a respective part of the source illumination to the purported tissue (101) through a respective region of the first surface regions (111a, 111b). [0007] 7.
Biometric access system, according to claim 6, characterized by the fact that each light source (505a, 505b) is arranged to provide a different wavelength of illumination. [0008] 8. Biometric access system, according to any one of the preceding claims, characterized by the fact that the detector subsystem (107) comprises an imaging arrangement arranged to generate the plurality of images, the imaging arrangement having an array of pixels that provides information spatially correlated with imaged regions of the purported tissue (101). [0009] 9. Biometric access system, according to any one of the preceding claims, characterized by the fact that the one or more processors (1505) are arranged to differentiate between the purported tissue (101) being genuine tissue and being an imitation according to a spatiospectral analysis of the plurality of images. [0010] 10. Biometric access system, according to any one of the preceding claims, characterized by the fact that the one or more processors (1505) are arranged to biometrically authorize the individual by processing the plurality of images. [0011] 11. Biometric access system, according to claim 10, characterized by the fact that the one or more processors (1505) are arranged to biometrically authorize the individual by: computing a first plurality of key points in the plurality of images; and comparing the first plurality of key points with a second plurality of key points associated with previously stored biometric information. [0012] 12. Biometric access system, according to claim 10, characterized by the fact that: the plurality of images comprises multispectral information; and each of the first plurality of key points is computed as a corner on a spatial gradient of at least one of the plurality of images. [0013] 13.
Biometric access system, according to claim 10, characterized by the fact that the biometric sensor (100) is a miniaturized biometric sensor (100), and the individual's prior biometric enrollment was performed on a non-miniaturized biometric sensor. [0014] 14. Biometric access system, according to claim 10, characterized by the fact that it further comprises: a physical interface (1301, 1401) of a portable communication device, the physical interface (1301, 1401) having the biometric sensor (100) integrated therewith, and the physical interface (1301, 1401) arranged to allow access by the individual to the portable communication device when the individual is biometrically authorized by the biometric sensor (100). [0015] 15. Biometric access system, according to claim 10, characterized by the fact that it further comprises: a mechanical locking assembly movable between locked and unlocked positions to allow and disallow the opening of a restricted access area; and a handle to open the restricted access area when unlocked, in which the biometric sensor (100) is physically integrated with the handle and in communication with the mechanical locking assembly, so that the mechanical locking assembly is movable to its unlocked position in accordance with the individual being biometrically authorized by the biometric sensor (100).
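As an illustration of the key-point approach of claims 11 and 12 (key points computed as corners on a spatial gradient of an image, then compared against previously stored key points), a minimal Harris-style corner sketch follows. The functions, parameter values, and matching threshold here are hypothetical examples for illustration, not the claimed implementation.

```python
import numpy as np


def harris_keypoints(image, k=0.05, count=50):
    """Top-`count` corner locations from the spatial gradients of one
    image (cf. claim 12: each key point is a corner on a spatial
    gradient). `k` is the usual Harris sensitivity constant."""
    gy, gx = np.gradient(image.astype(float))
    # Structure-tensor entries; a real implementation would smooth these
    # over a local window before computing the corner response.
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    response = ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2
    top = np.argsort(response, axis=None)[::-1][:count]
    coords = np.column_stack(np.unravel_index(top, image.shape))
    return set(map(tuple, coords))


def verify(probe_keypoints, enrolled_keypoints, threshold=0.6):
    """Cf. claim 11: compare the probe's key points with a second
    plurality of key points from previously stored biometric
    information; the overlap-ratio test is a hypothetical rule."""
    overlap = len(probe_keypoints & enrolled_keypoints)
    return overlap / max(len(enrolled_keypoints), 1) >= threshold
```

In a multispectral setting (claim 12), `harris_keypoints` would be applied to at least one of the plurality of images, and a practical matcher would tolerate small spatial offsets rather than requiring exact coordinate overlap.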
Family patents (publication number | publication date):
EP3055693A4 | 2017-06-21
WO2015054686A1 | 2015-04-16
CN105980853B | 2018-07-06
EP3055693A1 | 2016-08-17
CN105980853A | 2016-09-28
US9886617B2 | 2018-02-06
US20150254495A1 | 2015-09-10
Priority:
US61/890,091 (US201361890091P), filed 2013-10-11
US61/947,293 (US201461947293P), filed 2014-03-03
PCT/US2014/060271 (WO2015054686A1), filed 2014-10-13: Miniaturized optical biometric sensing